    Visualization system requirements for data processing pipeline design and optimization

    The rising quantity and complexity of data creates a need to design and optimize data processing pipelines – the set of data processing steps, parameters and algorithms that perform operations on the data. Visualization can support this process but, although there are many examples of systems for visual parameter analysis, there remains a need to systematically assess users’ requirements and match those requirements to exemplar visualization methods. This article presents a new characterization of the requirements for pipeline design and optimization. This characterization is based on both a review of the literature and first-hand assessment of eight application case studies. We also match these requirements with exemplar functionality provided by existing visualization tools. Thus, we provide end-users and visualization developers with a way of identifying functionality that addresses data processing problems in an application. We also identify seven future challenges for visualization research that are not met by the capabilities of today’s systems.
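    As a minimal illustration of the visual parameter analysis that such systems support, the Python sketch below sweeps a single parameter of a hypothetical one-step pipeline and plots an output metric against it. The pipeline, metric and parameter range are invented for illustration, not taken from the article.

        # Hypothetical one-step pipeline: binarise the data, then report
        # the fraction of foreground as a stand-in output metric.
        import numpy as np
        import matplotlib.pyplot as plt

        def pipeline(data, threshold):
            return (data > threshold).mean()

        rng = np.random.default_rng(0)
        data = rng.normal(size=10_000)

        thresholds = np.linspace(-2, 2, 50)
        quality = [pipeline(data, t) for t in thresholds]

        plt.plot(thresholds, quality)
        plt.xlabel("threshold parameter")
        plt.ylabel("output metric (foreground fraction)")
        plt.title("Parameter sweep for pipeline optimization")
        plt.show()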

    Using Dashboard Networks to Visualize Multiple Patient Histories: A Design Study on Post-operative Prostate Cancer

    In this design study, we present a visualization technique that segments patients' histories instead of treating them as raw event sequences, aggregates the segments using criteria such as the whole history or treatment combinations, and then visualizes the aggregated segments as static dashboards that are arranged in a dashboard network to show longitudinal changes. The static dashboards were developed in nine iterations, to show 15 important attributes from the patients' histories. The final design was evaluated with five non-experts, five visualization experts and four medical experts, who successfully used it to gain an overview of a 2,000-patient dataset, and to make observations about longitudinal changes and differences between two cohorts. The research represents a step-change in the detail of large-scale data that may be successfully visualized using dashboards, and provides guidance about how the approach may be generalized.
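    The segment-then-aggregate idea can be sketched in a few lines of Python. The event names, treatments and histories below are invented; each key of the aggregated mapping would feed one dashboard in the network.

        from collections import defaultdict

        TREATMENTS = {"surgery", "radiotherapy"}  # invented segmentation criteria

        histories = {
            "p1": ["psa_test", "surgery", "psa_test", "radiotherapy", "psa_test"],
            "p2": ["psa_test", "radiotherapy", "psa_test"],
        }

        def segment(history):
            """Yield (treatment, events-after-treatment) segments."""
            current, events = "pre-treatment", []
            for e in history:
                if e in TREATMENTS:
                    yield current, events
                    current, events = e, []
                else:
                    events.append(e)
            yield current, events

        aggregated = defaultdict(list)
        for patient, history in histories.items():
            for treatment, events in segment(history):
                aggregated[treatment].append((patient, events))

        for treatment, segments in aggregated.items():
            print(treatment, "->", segments)  # one dashboard per key in the network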

    PETMiner - A visual analysis tool for petrophysical properties of core sample data

    The aim of the PETMiner software is to reduce the time and monetary cost of analysing petrophysical data that are obtained from reservoir sample cores. Analysis of these data requires tacit knowledge to fill ‘gaps’ so that predictions can be made for incomplete data. Through discussions with 30 industry and academic specialists, we identified three analysis use cases that exemplified the limitations of current petrophysics analysis tools. We used those use cases to develop nine core requirements for PETMiner, which is innovative because of its ability to display detailed images of the samples as data points, directly plot multiple sample properties and derived measures for comparison, and substantially reduce interaction cost. An 11-month evaluation demonstrated benefits across all three use cases by allowing a consultant to: (1) generate more accurate reservoir flow models, (2) discover a previously unknown relationship between one easy-to-measure property and another that is costly to measure, and (3) make a 100-fold reduction in the time required to produce plots for a report.
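    The first of those innovations, displaying detailed images of the samples as data points, can be sketched with matplotlib's AnnotationBbox. The porosity and permeability values and the thumbnail images below are invented stand-ins for core-sample data, not PETMiner's own plotting code.

        import numpy as np
        import matplotlib.pyplot as plt
        from matplotlib.offsetbox import OffsetImage, AnnotationBbox

        porosity = np.array([0.05, 0.12, 0.20])        # invented sample properties
        permeability = np.array([0.1, 5.0, 120.0])     # invented values, in mD

        fig, ax = plt.subplots()
        ax.set_xlabel("porosity")
        ax.set_ylabel("permeability (mD)")
        ax.set_yscale("log")

        for x, y in zip(porosity, permeability):
            thumb = np.random.rand(16, 16)             # stand-in for a core photo
            box = AnnotationBbox(OffsetImage(thumb, zoom=2, cmap="gray"),
                                 (x, y), frameon=False)
            ax.add_artist(box)                         # image becomes the data point

        ax.set_xlim(0, 0.25)
        ax.set_ylim(0.05, 500)
        plt.show()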

    Leveraging wall-sized high-resolution displays for comparative genomics analyses of copy number variation

    The scale of comparative genomics data frequently overwhelms current data visualization methods on conventional (desktop) displays. This paper describes two types of solution that take advantage of wall-sized high-resolution displays (WHirDs), which have orders of magnitude more display real estate (i.e., pixels) than desktop displays. The first allows users to view detailed graphics of copy number variation (CNV) that were output by existing software. A WHirD's resolution allowed a 10× increase in the granularity of bioinformatics output that was feasible for users to visually analyze, and this revealed a pattern that had previously been smoothed out from the underlying data. The second involves interactive visualization software that is innovative because it uses a music score metaphor to lay out CNV data, overcomes a perceptual distortion caused by amplification/deletion thresholds, uses filtering to reduce graphical data overload, and is the first comparative genomics visualization software designed to leverage a WHirD's real estate. In a field evaluation, a clinical user discovered a fundamental error in the way their data had been processed, and established confidence in the software by using it to 'find' known genetic patterns in hepatitis C-driven hepatocellular cancer.
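    The perceptual distortion caused by amplification/deletion thresholds can be illustrated with a minimal sketch: a continuous copy-number signal (here a log2 ratio) is collapsed into three categories, so nearly identical values can land in different categories. The thresholds and ratios below are invented, not the paper's.

        import numpy as np

        AMP_THRESHOLD, DEL_THRESHOLD = 0.3, -0.3   # invented call thresholds

        def call_cnv(log2_ratio):
            if log2_ratio >= AMP_THRESHOLD:
                return "amplification"
            if log2_ratio <= DEL_THRESHOLD:
                return "deletion"
            return "neutral"

        ratios = np.array([0.29, 0.31, -0.05, -1.2])
        print([call_cnv(r) for r in ratios])
        # ['neutral', 'amplification', 'neutral', 'deletion']
        # 0.29 and 0.31 are nearly equal but look categorically different
        # once thresholded, which is the distortion the software overcomes.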

    Methods and a research agenda for the evaluation of event sequence visualization techniques

    The present paper asks how visualization can help data scientists make sense of event sequences, and makes three main contributions. The first is a research agenda, which we divide into methods for presentation, interaction & computation, and scale-up. The second is the concept of Event Maps, introduced to help with scale-up and illustrated with coarse-, medium- and fine-grained Event Maps for electronic health record (EHR) data for prostate cancer. The third is an experiment that investigated participants’ ability to judge the similarity of event sequences. Contrary to previous research into categorical data, color and shape were better than position for encoding event type. However, even with simple sequences (5 events of 3 types in the target sequence), participants were only 88% correct, despite averaging 7.4 seconds to respond. This indicates that simple visualization techniques are not effective.
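    One standard way to quantify how similar two event sequences are is edit (Levenshtein) distance over event types. The abstract does not specify the paper's exact measure, so the sketch below is illustrative, with invented event types.

        def levenshtein(a, b):
            """Edit distance between two event sequences (lists of event types)."""
            prev = list(range(len(b) + 1))
            for i, x in enumerate(a, 1):
                curr = [i]
                for j, y in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,              # deletion
                                    curr[j - 1] + 1,          # insertion
                                    prev[j - 1] + (x != y)))  # substitution
                prev = curr
            return prev[-1]

        target = ["A", "B", "A", "C", "B"]       # 5 events of 3 types
        candidate = ["A", "B", "C", "C", "B"]
        print(levenshtein(target, candidate))    # 1 (one substitution)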

    The design and evaluation of interfaces for navigating gigapixel images in digital pathology

    This paper describes the design and evaluation of two generations of an interface for navigating datasets of gigapixel images that pathologists use to diagnose cancer. The interface design is innovative because users panned with an overview:detail view scale difference that was up to 57 times larger than established guidelines, and 1-million-pixel ‘thumbnail’ overviews leveraged the real estate of high-resolution workstation displays. The research involved experts performing real work (pathologists diagnosing cancer), using datasets that were up to 3150 times larger than those used in previous studies that involved navigating images. The evaluation provides evidence about the effectiveness of the interfaces, and characterizes how experts navigate gigapixel images when performing real work. Similar interfaces could be adopted in applications that use other types of high-resolution images (e.g., remote sensing or high-throughput microscopy).
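    At the heart of any such overview:detail interface is a coordinate mapping from the small overview to a viewport in the full-resolution image. The sketch below shows one minimal version of that mapping; the image, overview and viewport sizes are invented, not the paper's.

        def overview_to_detail(click_xy, overview_size, image_size, viewport_size):
            """Map an overview click to a centred detail viewport (x, y, w, h)."""
            sx = image_size[0] / overview_size[0]
            sy = image_size[1] / overview_size[1]
            cx, cy = click_xy[0] * sx, click_xy[1] * sy
            w, h = viewport_size
            x = min(max(cx - w / 2, 0), image_size[0] - w)  # clamp to image edges
            y = min(max(cy - h / 2, 0), image_size[1] - h)
            return x, y, w, h

        # 1-megapixel overview of a hypothetical gigapixel slide,
        # with a 2560x1440 detail viewport.
        print(overview_to_detail((600, 400), (1250, 800),
                                 (125_000, 80_000), (2560, 1440)))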

    The Effect of Alignment on People's Ability to Judge Event Sequence Similarity

    Event sequences are central to the analysis of data in domains that range from biology and health, to logfile analysis and people's everyday behavior. Many visualization tools have been created for such data, but people are error-prone when asked to judge the similarity of event sequences with basic presentation methods. This paper describes an experiment that investigates whether local and global alignment techniques improve people's performance when judging sequence similarity. Participants were divided into three groups (basic vs. local vs. global alignment), and each participant judged the similarity of 180 sets of pseudo-randomly generated sequences. Each set comprised a target, a correct choice and a wrong choice. After training, the global alignment group was more accurate than the local alignment group (98% vs. 93% correct), with the basic group getting 95% correct. Participants' response times were primarily affected by the number of event types, the similarity of sequences (measured by the Levenshtein distance) and the edit types (nine combinations of deletion, insertion and substitution). In summary, global alignment is superior and people's performance could be further improved by choosing alignment parameters that explicitly penalize sequence mismatches.
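    The abstract does not name the specific algorithms used, but the standard global alignment method is the Needleman-Wunsch recurrence, sketched below for event sequences with invented scoring parameters. Penalizing mismatches more heavily, as the summary suggests, would correspond to lowering the mismatch score.

        def global_align(a, b, match=1, mismatch=-1, gap=-1):
            """Needleman-Wunsch global alignment score for two sequences."""
            n, m = len(a), len(b)
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap              # leading gaps in b
            for j in range(1, m + 1):
                score[0][j] = j * gap              # leading gaps in a
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1]
                                                  else mismatch)
                    score[i][j] = max(diag,
                                      score[i - 1][j] + gap,   # gap in b
                                      score[i][j - 1] + gap)   # gap in a
            return score[n][m]

        print(global_align(list("ABACB"), list("ABCCB")))  # 3: 4 matches, 1 mismatch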

    How, in what contexts, and why do quality dashboards lead to improvements in care quality in acute hospitals? Protocol for a realist feasibility evaluation

    Introduction: National audits are used to monitor care quality and safety and are anticipated to reduce unexplained variations in quality by stimulating quality improvement (QI). However, variation within and between providers in the extent of engagement with national audits means that the potential for national audit data to inform QI is not being realised. This study will undertake a feasibility evaluation of QualDash, a quality dashboard designed to support clinical teams and managers to explore data from two national audits, the Myocardial Ischaemia National Audit Project (MINAP) and the Paediatric Intensive Care Audit Network (PICANet).

    Methods and analysis: Realist evaluation, which involves building, testing and refining theories of how an intervention works, provides an overall framework for this feasibility study. Realist hypotheses that describe how, in what contexts, and why QualDash is expected to provide benefit will be tested across five hospitals. A controlled interrupted time series analysis, using key MINAP and PICANet measures, will provide preliminary evidence of the impact of QualDash, while ethnographic observations and interviews over 12 months will provide initial insight into contexts and mechanisms that lead to those impacts. Feasibility outcomes include the extent to which MINAP and PICANet data are used, data completeness in the audits, and the extent to which participants perceive QualDash to be useful and express the intention to continue using it after the study period.

    Ethics and dissemination: The study has been approved by the University of Leeds School of Healthcare Research Ethics Committee. Study results will provide an initial understanding of how, in what contexts, and why quality dashboards lead to improvements in care quality. These will be disseminated to academic audiences, study participants, hospital IT departments and national audits. If the results show a trial is feasible, we will disseminate the QualDash software through a stepped wedge cluster randomised trial.
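    The interrupted time series component can be sketched as segmented regression with level-change and slope-change terms at the point the dashboard is introduced. The sketch below is a simplified, single-site, uncontrolled version with simulated data; the study's actual analysis is controlled and uses real MINAP and PICANet measures.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        t = np.arange(24)                         # months of audit data
        post = (t >= 12).astype(float)            # dashboard introduced at month 12

        # Simulated audit measure with a level change (+4) and slope change (+0.5).
        y = 50 + 0.2 * t + 4 * post + 0.5 * post * (t - 12) + rng.normal(0, 1, 24)

        X = sm.add_constant(np.column_stack([t, post, post * (t - 12)]))
        fit = sm.OLS(y, X).fit()
        print(fit.params)  # [baseline, pre-trend, level change, slope change]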

    Visual parameter optimisation for biomedical image processing

    Background: Biomedical image processing methods require users to optimise input parameters to ensure high quality output. This presents two challenges. First, it is difficult to optimise multiple input parameters for multiple input images. Second, it is difficult to achieve an understanding of underlying algorithms, in particular, relationships between input and output.

    Results: We present a visualisation method that transforms users’ ability to understand algorithm behaviour by integrating input and output, and by supporting exploration of their relationships. We discuss its application to a colour deconvolution technique for stained histology images and show how it enabled a domain expert to identify suitable parameter values for the deconvolution of two types of images, and metrics to quantify deconvolution performance. It also enabled a breakthrough in understanding by invalidating an underlying assumption about the algorithm.

    Conclusions: The visualisation method presented here provides analysis capability for multiple inputs and outputs in biomedical image processing that is not supported by previous analysis software. The analysis supported by our method is not feasible with conventional trial-and-error approaches.
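    Colour deconvolution itself can be sketched with scikit-image, where the stain matrix plays the role of the input parameters being explored. The image and the output metric below are invented, not the paper's data or its performance metrics.

        import numpy as np
        from skimage.color import separate_stains, hed_from_rgb

        rng = np.random.default_rng(0)
        rgb = rng.random((64, 64, 3))  # stand-in for a stained histology image

        # hed_from_rgb is one fixed stain matrix; exploring alternative
        # matrices is the kind of parameter optimisation the paper supports.
        stains = separate_stains(rgb, hed_from_rgb)  # H, E, DAB channels

        # One possible per-parameter-setting metric: mean haematoxylin response.
        print(stains[..., 0].mean())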

    The development of path integration: combining estimations of distance and heading

    Efficient daily navigation is underpinned by path integration, the mechanism by which we use self-movement information to update our position in space. This process is well understood in adulthood, but there has been relatively little study of path integration in childhood, leading to an underrepresentation in accounts of navigational development. Previous research has shown that estimates of both distance and heading tend to be less accurate in children than in adults, although there have been no studies of the combined calculation of distance and heading that typifies naturalistic path integration. In the present study, 5-year-olds and 7-year-olds took part in a triangle-completion task, where they were required to return to the start point of a multi-element path using only idiothetic information. Performance was compared to a sample of adult participants, who were found to be more accurate than children on measures of landing error, heading error, and distance error. Seven-year-olds were significantly more accurate than 5-year-olds on measures of landing error and heading error, although the difference between groups was much smaller for distance error. All measures were reliably correlated with age, demonstrating a clear development of path integration abilities within the age range tested. Taken together, these data make a strong case for the inclusion of path integration within developmental models of spatial navigational processing.
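    The three error measures can be sketched as simple vector computations on a triangle-completion trial. The coordinates below are invented, and the paper's exact definitions may differ from this illustrative version.

        import numpy as np

        start = np.array([0.0, 0.0])        # true start point (the homing target)
        end_of_path = np.array([2.0, 3.0])  # where the outbound path ends
        response = np.array([-0.5, 0.4])    # where the participant stopped

        # Landing error: distance between the response and the true start point.
        landing_error = np.linalg.norm(response - start)

        # Distance error: difference between walked and correct return distances.
        correct_vec = start - end_of_path
        response_vec = response - end_of_path
        distance_error = abs(np.linalg.norm(response_vec) -
                             np.linalg.norm(correct_vec))

        # Heading error: angle between the correct and taken return directions.
        cos = np.dot(correct_vec, response_vec) / (
            np.linalg.norm(correct_vec) * np.linalg.norm(response_vec))
        heading_error = np.degrees(np.arccos(np.clip(cos, -1, 1)))

        print(landing_error, distance_error, heading_error)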